Search for: All records

Creators/Authors contains: "Blanchet, Jose"


  1. We prove a sample-path large deviation principle (LDP) with sublinear speed for unbounded functionals of certain Markov chains induced by the Lindley recursion. The LDP holds in the Skorokhod space D[0,1] equipped with the M1' topology. Our technique hinges on a suitable decomposition of the Markov chain in terms of regeneration cycles. Each regeneration cycle denotes the area accumulated during the busy period of the reflected random walk. We prove a large deviation principle for the area under the busy period of the Markov random walk, and we show that it exhibits heavy-tailed behavior.

    Funding: The research of B. Zwart and M. Bazhba is supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Grant 639.033.413]. The research of J. Blanchet is supported by the National Science Foundation (NSF) [Grants 1915967, 1820942, and 1838576] as well as the Defense Advanced Research Projects Agency [Grant N660011824028]. The research of C.-H. Rhee is supported by the NSF [Grant CMMI-2146530].

    Free, publicly-accessible full text available March 27, 2025
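    For orientation, here is a minimal sketch, in standard notation, of the objects the abstract refers to; the recursion is the classical Lindley form, and the cycle and area definitions below follow the usual conventions rather than necessarily the paper's exact ones.

    ```latex
    % Lindley recursion generating the reflected random walk:
    \[
      W_{n+1} = \max\bigl(W_n + X_{n+1},\, 0\bigr), \qquad W_0 = 0 .
    \]
    % Busy periods are the excursions of (W_n) between successive visits
    % to 0; writing \tau_k for the k-th return time to 0, the area
    % accumulated over the k-th regeneration cycle is
    \[
      A_k = \sum_{n = \tau_{k-1}}^{\tau_k - 1} W_n .
    \]
    ```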
  2. We provide faster algorithms for approximating the optimal transport distance (e.g., the earth mover's distance) between two discrete probability distributions on n elements. We present two algorithms that compute couplings between marginal distributions with an expected transportation cost that is within an additive ϵ of optimal in time O(n^2/ϵ); one algorithm is straightforward to parallelize and implementable in depth O(1/ϵ). Further, we show that additional improvements on our results must be coupled with breakthroughs in algorithmic graph theory.
    Free, publicly-accessible full text available January 1, 2025
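    The paper's algorithms rest on graph-algorithmic machinery not reproduced here; as a rough, generic stand-in, the sketch below approximates an OT coupling between two discrete distributions via entropic regularization (Sinkhorn iteration). It is illustrative only and is not the algorithm of the paper; all names and parameters are made up for the example.

    ```python
    import numpy as np

    def sinkhorn_coupling(p, q, C, reg=0.05, iters=500):
        """Approximate OT coupling between discrete distributions p and q
        with cost matrix C via entropic regularization (Sinkhorn).
        A generic stand-in, NOT the algorithm from the paper above."""
        K = np.exp(-C / reg)               # Gibbs kernel
        u = np.ones_like(p)
        for _ in range(iters):
            v = q / (K.T @ u)              # match column marginals to q
            u = p / (K @ v)                # match row marginals to p
        P = u[:, None] * K * v[None, :]    # coupling with marginals ~(p, q)
        return P, float(np.sum(P * C))     # coupling and its transport cost

    # toy example on n = 4 points on a line
    rng = np.random.default_rng(0)
    n = 4
    p = np.full(n, 1.0 / n)
    q = rng.dirichlet(np.ones(n))
    C = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    P, cost = sinkhorn_coupling(p, q, C)
    print(cost)
    ```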
  3. This paper shows that dropout training in generalized linear models is the minimax solution of a two-player, zero-sum game where an adversarial nature corrupts a statistician's covariates using a multiplicative nonparametric errors-in-variables model. In this game, nature's least favorable distribution is dropout noise, where nature independently deletes entries of the covariate vector with some fixed probability δ. This result implies that dropout training indeed provides out-of-sample expected loss guarantees for distributions that arise from multiplicative perturbations of in-sample data. The paper makes a concrete recommendation on how to select the tuning parameter δ. The paper also provides a novel, parallelizable, unbiased multi-level Monte Carlo algorithm to speed up the implementation of dropout training. Our algorithm has a much smaller computational cost than the naive implementation of dropout, provided the number of data points is much smaller than the dimension of the covariate vector.
    Free, publicly-accessible full text available June 1, 2024
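    A minimal sketch of the corruption model described in the abstract: each covariate entry is independently deleted with probability δ. The 1/(1 − δ) rescaling shown below is the standard inverted-dropout convention that keeps the corrupted covariates unbiased; whether the paper uses exactly this scaling is an assumption here.

    ```python
    import numpy as np

    def dropout_corrupt(X, delta, rng):
        """Multiplicative errors-in-variables corruption: each entry of X
        is independently zeroed out with probability delta, then rescaled
        by 1/(1 - delta) so the corrupted covariates remain unbiased
        (standard inverted-dropout convention; assumed, not quoted)."""
        mask = rng.random(X.shape) >= delta    # keep an entry w.p. 1 - delta
        return X * mask / (1.0 - delta)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))                # toy design matrix
    print(dropout_corrupt(X, delta=0.2, rng=rng))
    ```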
  4. The goal of this paper is to develop a methodology for the systematic analysis of asymptotic statistical properties of data-driven distributionally robust optimization (DRO) formulations based on their corresponding non-DRO counterparts. We illustrate our approach in various settings, including both phi-divergence and Wasserstein uncertainty sets. Different types of asymptotic behaviors are obtained depending on the rate at which the uncertainty radius decreases to zero as a function of the sample size and the geometry of the uncertainty sets.
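    For reference, the generic data-driven DRO template the abstract alludes to can be written as follows; this is a standard form, with the choice of discrepancy D and the radius schedule left to the paper.

    ```latex
    \[
      \min_{\theta} \; \sup_{P \,:\; D(P,\, \hat{P}_n) \le \delta_n}
        \mathbb{E}_P\bigl[\ell(\theta, X)\bigr]
    \]
    % Here \hat{P}_n is the empirical distribution of n samples, D is a
    % phi-divergence or a Wasserstein distance, and the radius \delta_n
    % shrinks to zero with n; the asymptotics depend on how fast
    % \delta_n \to 0 and on the geometry of the uncertainty set.
    ```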
  5. The utility of a match in a two-sided matching market often depends on a variety of characteristics of the two agents (e.g., a buyer and a seller) to be matched. In contrast to the matching market literature, this utility may best be modeled by a general matching utility distribution. In “Asymptotically Optimal Control of a Centralized Dynamic Matching Market with General Utilities,” Blanchet, Reiman, Shah, Wein, and Wu consider general matching utilities in the context of a centralized dynamic matching market. To analyze this difficult problem, they combine two asymptotic techniques: extreme value theory (and regularly varying functions) and fluid asymptotics of queueing systems. A key trade-off in this problem is market thickness: Do we myopically make the best match that is currently available, or do we allow the market to thicken in the hope of making a better match in the future while avoiding agent abandonment? Their asymptotic analysis derives quite explicit results for this problem and reveals how the optimal amount of market thickness increases with the right tail of the matching utility distribution and the amount of market imbalance. Their use of regularly varying functions also allows them to consider correlated matching utilities (e.g., buyers having positively correlated utilities with a given seller), a feature that is ubiquitous in matching markets.
  6. We study the problem of transfer learning, observing that previous efforts to understand its information-theoretic limits do not fully exploit the geometric structure of the source and target domains. In contrast, our study first illustrates the benefits of incorporating a natural geometric structure within a linear regression model, which corresponds to the generalized eigenvalue problem formed by the Gram matrices of both domains. We next establish a finite-sample minimax lower bound, propose a refined model interpolation estimator that enjoys a matching upper bound, and then extend our framework to multiple source domains and generalized linear models. Surprisingly, as long as information is available on the distance between the source and target parameters, negative transfer does not occur. Simulation studies show that our proposed interpolation estimator outperforms state-of-the-art transfer learning methods in both moderate- and high-dimensional settings.
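    A minimal numerical sketch of the geometric structure the abstract points to: the generalized eigenvalue problem formed by the Gram matrices of the source and target domains. All names and sizes below are illustrative; how the paper's estimator uses this decomposition is not reproduced here.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)

    # Hypothetical source/target design matrices (illustrative sizes).
    Xs = rng.normal(size=(200, 10))   # source domain samples
    Xt = rng.normal(size=(50, 10))    # target domain samples

    # Gram (second-moment) matrices of the two domains.
    Gs = Xs.T @ Xs / Xs.shape[0]
    Gt = Xt.T @ Xt / Xt.shape[0]

    # Generalized eigenvalue problem  Gs v = lambda * Gt v,
    # the structure the abstract identifies in the linear model.
    eigvals, eigvecs = eigh(Gs, Gt)
    print(eigvals)
    ```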
  7. Distributionally robust optimization (DRO) has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e., if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provides a unified viewpoint on a class of existing robust methods but also leads to new regularization tools. To realize these novel tools, provably efficient computational algorithms are proposed. As a byproduct, the strong duality theorem proved in this paper can potentially be applied to other problems of independent interest.
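    For reference, a minimal sketch of the regularizer in question: Tikhonov- (ridge-) regularized least squares in its textbook closed form. The DRO equivalence established in the paper is not re-derived here; the code only fixes what "Tikhonov regularization" refers to.

    ```python
    import numpy as np

    def tikhonov_ls(X, y, lam):
        """Tikhonov-(ridge-)regularized least squares:
        argmin_b ||y - X b||^2 + lam * ||b||^2,
        solved via its closed-form normal equations."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = X @ beta + 0.1 * rng.normal(size=100)
    print(tikhonov_ls(X, y, lam=1.0))    # close to beta for small lam
    ```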
  8. We consider optimal transport-based distributionally robust optimization (DRO) problems with locally strongly convex transport cost functions and affine decision rules. Under conventional convexity assumptions on the underlying loss function, we obtain structural results about the value function, the optimal policy, and the worst-case optimal transport adversarial model. These results expose a rich structure embedded in the DRO problem (e.g., strong convexity even if the non-DRO problem is not strongly convex, and a suitable scaling of the Lagrangian for the DRO constraint), features that are crucial for the design of efficient algorithms. As a consequence of these results, one can develop efficient optimization procedures that have the same sample and iteration complexity as a natural non-DRO benchmark algorithm, such as stochastic gradient descent.
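    As a rough illustration of the kind of procedure the abstract describes, the sketch below runs stochastic gradient descent on a Lagrangian relaxation of an OT-DRO objective with quadratic transport cost, alternating an inner adversarial transport step with an outer parameter step. This is a generic pattern under assumed smoothness, not the paper's algorithm; every name and hyperparameter is illustrative.

    ```python
    import numpy as np

    def dro_sgd(grad_theta, grad_x, theta0, data, lam=10.0,
                lr=0.01, inner_lr=0.05, inner_steps=5, epochs=5):
        """SGD on the Lagrangian  min_theta E[ max_z loss(theta, z)
        - lam * ||z - x||^2 / 2 ]: a generic OT-DRO training pattern,
        NOT the specific algorithm of the paper above."""
        theta = theta0.copy()
        for _ in range(epochs):
            for x in data:
                z = x.copy()
                for _ in range(inner_steps):        # inner adversarial ascent
                    z += inner_lr * (grad_x(theta, z) - lam * (z - x))
                theta -= lr * grad_theta(theta, z)  # outer descent step
        return theta

    # toy quadratic loss: loss(theta, x) = ||theta - x||^2 / 2
    g_theta = lambda th, x: th - x
    g_x = lambda th, x: x - th
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 3))
    print(dro_sgd(g_theta, g_x, np.zeros(3), data))
    ```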
  9. Chaudhuri, Kamalika; Jegelka, Stefanie; Song, Le; Szepesvari, Csaba; Niu, Gang; Sabato, Sivan (Eds.)
    Reinforcement learning (RL) has demonstrated remarkable achievements in simulated environments. However, carrying this success over to real environments requires the important attribute of robustness, which the existing RL algorithms often lack, as they assume that the future deployment environment is the same as the training environment (i.e., the simulator) in which the policy is learned. This assumption often fails to hold due to the discrepancy between the simulator and the real environment and, as a result, renders the learned policy fragile when deployed. In this paper, we propose a novel distributionally robust Q-learning algorithm that learns the best policy under the worst distributional perturbation of the environment. Our algorithm first transforms the infinite-dimensional learning problem (since the environment MDP perturbation lies in an infinite-dimensional space) into a finite-dimensional dual problem and subsequently uses a multi-level Monte Carlo scheme to approximate the dual value using samples from the simulator. Despite the complexity, we show that the resulting distributionally robust Q-learning algorithm asymptotically converges to the optimal worst-case policy, thus making it robust to future environment changes. Simulation results further demonstrate its strong empirical robustness.
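    To make the duality step concrete, the sketch below computes a worst-case expected next-state value through the standard dual of a KL-divergence uncertainty set, with the expectation estimated from simulator samples. The KL ball and the plain Monte Carlo average are stand-ins chosen for brevity; the paper works with its own perturbation set and an unbiased multi-level Monte Carlo estimator.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def robust_expectation_kl(v_samples, delta):
        """Worst case of E_Q[V] over {Q : KL(Q || P) <= delta} via the
        standard dual
            inf_Q E_Q[V] = sup_{a>0} ( -a*log E_P[exp(-V/a)] - a*delta ),
        with E_P replaced by a sample average over simulator draws.
        (KL ball + plain averaging are stand-ins, not the paper's setup.)"""
        def neg_dual(a):
            m = np.max(-v_samples / a)                       # stabilizer
            lme = m + np.log(np.mean(np.exp(-v_samples / a - m)))
            return a * lme + a * delta                       # -(dual value)
        res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
        return -res.fun

    # toy next-state values V(s') sampled from a simulator; a robust
    # Bellman backup would use r + gamma * robust_expectation_kl(...)
    rng = np.random.default_rng(0)
    v = rng.normal(loc=1.0, scale=0.5, size=1000)
    print(robust_expectation_kl(v, delta=0.1))  # below the plain mean ~1.0
    ```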